use of the LLM to perform next-token prediction, and then map the predicted next token to a classification label.
LLMs popularized zero-shot learning via "prompt engineering," which is drastically easier than labeling data and can be just as effective. You can also retrofit prompt-engineering-style interfaces onto good old-fashioned ML models like text classifiers.
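A minimal sketch of the next-token-to-label idea: score a small set of "verbalizer" tokens against the model's next-token distribution and pick the best one. The `toy_next_token_logits` function here is a hypothetical stand-in for a real LLM's next-token scores, just to keep the example self-contained; in practice you would read these scores from your model's logits or logprobs.

```python
# Sketch: zero-shot classification by restricting next-token prediction
# to a fixed set of label tokens. `toy_next_token_logits` is a made-up
# stand-in for a real LLM's next-token log-scores.

def toy_next_token_logits(prompt: str) -> dict:
    """Hypothetical model: returns log-scores for candidate next tokens."""
    text = prompt.lower()
    pos = sum(w in text for w in ("great", "love", "excellent"))
    neg = sum(w in text for w in ("awful", "hate", "terrible"))
    return {" positive": float(pos - neg), " negative": float(neg - pos)}

# Map each candidate next token to the classification label it stands for.
LABEL_TOKENS = {" positive": "positive", " negative": "negative"}

def classify(review: str) -> str:
    # Zero-shot prompt: no labeled examples, just an instruction-shaped template.
    prompt = f"Review: {review}\nSentiment:"
    logits = toy_next_token_logits(prompt)
    # Consider only the label tokens and take the highest-scoring one.
    best = max(LABEL_TOKENS, key=lambda tok: logits.get(tok, float("-inf")))
    return LABEL_TOKENS[best]

print(classify("I love this, it is excellent"))  # -> positive
```

The key design point is that the model is never asked to generate freely: the output space is constrained to the verbalizer tokens, so every prediction maps cleanly to a label.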